# Efficient GGUF inference

## Wan2.1 14B VACE GGUF
- License: Apache-2.0
- Publisher: QuantStack
- Task: Text-to-Video
- Downloads: 146.36k · Likes: 139

A GGUF-format release of the Wan2.1-VACE-14B model, used mainly for text-to-video generation tasks.
## CodeLlama 7B Python GGUF
- Publisher: TheBloke
- Tags: Large Language Model, Transformers
- Downloads: 2,385 · Likes: 57

CodeLlama 7B Python is a 7B-parameter large language model developed by Meta that focuses on Python code generation; this release provides quantized versions in GGUF format.
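Both models above ship as GGUF files, the single-file container format used by llama.cpp and compatible runtimes. A downloaded file can be quickly identified by its fixed header prefix: the ASCII magic `GGUF`, a version number, a tensor count, and a metadata key-value count. Below is a minimal sketch of that check in Python; the byte string it parses is synthetic, built in the script for illustration, not a real model file.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size prefix of a GGUF file header.

    Layout (little-endian): 4-byte magic "GGUF", uint32 version,
    uint64 tensor count, uint64 metadata key-value count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Synthetic header for demonstration only (values are made up).
sample = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(read_gguf_header(sample))
```

In practice one would read the first 24 bytes of a real `.gguf` download and apply the same check before handing the file to an inference runtime.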
© 2025 AIbase